We present X-Decoder, a generalized decoding model that can predict pixel-level segmentation and language tokens seamlessly. X-Decoder takes as input two types of queries: (i) generic non-semantic queries and (ii) semantic queries induced from text inputs, to decode different pixel-level and token-level outputs in the same semantic space. With such a novel design, X-Decoder is the first work that provides a unified way to support all types of image segmentation and a variety of vision-language (VL) tasks. Further, our design enables seamless interactions across tasks at different granularities and brings mutual benefits by learning a common and rich pixel-level visual-semantic understanding space, without any pseudo-labeling. After pretraining on a mixed set of a limited amount of segmentation data and millions of image-text pairs, X-Decoder exhibits strong transferability to a wide range of downstream tasks in both zero-shot and finetuning settings. Notably, it achieves (1) state-of-the-art results on open-vocabulary segmentation and referring segmentation on eight datasets; (2) better or competitive finetuned performance compared to other generalist and specialist models on segmentation and VL tasks; and (3) flexibility for efficient finetuning and novel task composition (e.g., referring captioning and image editing). Code, demo, video, and visualization are available at https://x-decoder-vl.github.io.
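A minimal sketch of the two-query-type decoding interface described above, assuming a Transformer decoder backbone; module names, tensor shapes, and head choices are illustrative assumptions, not the released X-Decoder implementation.

```python
import torch
import torch.nn as nn

class GeneralizedDecoderSketch(nn.Module):
    """Illustrative sketch: latent (non-semantic) queries and text-derived
    (semantic) queries are decoded jointly against image features, producing
    pixel-level masks and token-level outputs in one shared semantic space."""

    def __init__(self, dim=512, num_latent_queries=100, vocab_size=30522):
        super().__init__()
        self.latent_queries = nn.Parameter(torch.randn(num_latent_queries, dim))
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=6)
        self.mask_embed = nn.Linear(dim, dim)          # projects queries for mask prediction
        self.token_head = nn.Linear(dim, vocab_size)   # scores language tokens

    def forward(self, image_feats, pixel_feats, text_queries):
        # image_feats:  (B, HW, dim)   flattened encoder features
        # pixel_feats:  (B, dim, H, W) high-resolution pixel embeddings
        # text_queries: (B, T, dim)    semantic queries induced from text
        B = image_feats.size(0)
        latent = self.latent_queries.unsqueeze(0).expand(B, -1, -1)
        queries = torch.cat([latent, text_queries], dim=1)
        decoded = self.decoder(tgt=queries, memory=image_feats)

        # pixel-level output: dot product between query embeddings and pixel features
        n_latent = self.latent_queries.size(0)
        mask_queries = self.mask_embed(decoded[:, :n_latent])            # (B, Q, dim)
        masks = torch.einsum("bqd,bdhw->bqhw", mask_queries, pixel_feats)

        # token-level output: language logits from the semantic queries
        token_logits = self.token_head(decoded[:, n_latent:])            # (B, T, vocab)
        return masks, token_logits
```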
With increasing privacy concerns on data, recent studies have made significant progress using federated learning (FL) on privacy-sensitive natural language processing (NLP) tasks. Much literature suggests fully fine-tuning pre-trained language models (PLMs) in the FL paradigm can mitigate the data heterogeneity problem and close the performance gap with centralized training. However, large PLMs bring the curse of prohibitive communication overhead and local model adaptation costs for the FL system. To this end, we introduce various parameter-efficient tuning (PETuning) methods into federated learning. Specifically, we provide a holistic empirical study of representative PLM tuning methods in FL. The experimental results cover the analysis of data heterogeneity levels, data scales, and different FL scenarios. Overall, communication overhead can be significantly reduced by tuning lightweight model parameters locally and aggregating them globally, while maintaining acceptable performance in various FL settings. To facilitate the research of PETuning in FL, we also develop a federated tuning framework, FedPETuning, which allows practitioners to conveniently exploit different PETuning methods under the FL training paradigm. The source code is available at \url{https://github.com/iezhuozhuo/FedETuning/tree/deltaTuning}.
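A minimal sketch of the communication pattern described above: only the lightweight PETuning parameters (e.g., adapter or LoRA weights) leave each client and are averaged on the server, while the frozen PLM backbone stays local. Function and key names are illustrative, not the FedPETuning API.

```python
from typing import Dict, List
import torch


def extract_trainable(state: Dict[str, torch.Tensor],
                      trainable_keys: List[str]) -> Dict[str, torch.Tensor]:
    """Keep only the lightweight PETuning parameters (e.g. adapter/LoRA weights)."""
    return {k: v.detach().clone() for k, v in state.items()
            if any(key in k for key in trainable_keys)}


def fedavg(client_updates: List[Dict[str, torch.Tensor]],
           client_sizes: List[int]) -> Dict[str, torch.Tensor]:
    """Weighted FedAvg over the lightweight parameters only."""
    total = float(sum(client_sizes))
    keys = client_updates[0].keys()
    return {
        k: sum(w * update[k]
               for w, update in zip((n / total for n in client_sizes), client_updates))
        for k in keys
    }


# One communication round (sketch):
#   1. each client fine-tunes only its adapter/LoRA weights on local data
#   2. clients send those weights (a small fraction of the PLM) to the server
#   3. the server averages them and broadcasts the result back, e.g.:
# round_update = fedavg([extract_trainable(c.state_dict(), ["lora_", "adapter"])
#                        for c in client_models], client_sizes)
```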
Aspect Sentiment Triplet Extraction (ASTE) aims to extract the spans of aspects, opinions, and their sentiment relations as sentiment triplets. Existing works usually formulate span detection as a 1D token tagging problem and model sentiment recognition with a 2D tagging matrix over token pairs. Moreover, they achieve better performance by leveraging the representations of pretrained language encoders (PLEs) such as BERT. However, they simply use PLEs as feature extractors to build their modules and never look deeper into the specific knowledge they contain. In this paper, we argue that instead of further designing modules to capture inductive biases for ASTE, PLEs already contain "enough" features for both 1D and 2D tagging: (1) token representations carry the contextual meaning of each token, so this level of features provides the necessary information for 1D tagging; (2) the attention matrices of different PLE layers further capture the multi-level linguistic knowledge present in token pairs, which benefits 2D tagging; (3) moreover, with simple transformations, these two kinds of features can be easily converted into the 2D tagging matrix and the 1D tagging sequence, which further improves the tagging results. In this way, PLEs can serve as natural tagging frameworks and achieve a new state of the art, which is verified by extensive experiments and in-depth analyses.
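A minimal sketch of the two feature sources argued for above, assuming a Hugging Face BERT encoder: the final-layer token representations feed a 1D tagging head, while the per-layer attention matrices are stacked as token-pair features for a 2D tagging head. The head shapes and label counts are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
from transformers import BertModel

class PLETaggingSketch(nn.Module):
    def __init__(self, num_1d_labels=5, num_2d_labels=4):
        super().__init__()
        self.encoder = BertModel.from_pretrained(
            "bert-base-uncased", output_attentions=True)
        hidden = self.encoder.config.hidden_size                      # 768
        pair_feats = (self.encoder.config.num_hidden_layers
                      * self.encoder.config.num_attention_heads)      # 12 * 12 = 144
        self.head_1d = nn.Linear(hidden, num_1d_labels)       # span (aspect/opinion) tags
        self.head_2d = nn.Linear(pair_feats, num_2d_labels)   # sentiment relation tags

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        # (1) token representations -> 1D tagging logits, shape (B, L, num_1d_labels)
        logits_1d = self.head_1d(out.last_hidden_state)
        # (2) attention matrices from all layers/heads -> token-pair features;
        #     out.attentions is a tuple of (B, heads, L, L) tensors, one per layer
        attn = torch.cat(out.attentions, dim=1)                # (B, layers*heads, L, L)
        pair = attn.permute(0, 2, 3, 1)                        # (B, L, L, layers*heads)
        # 2D tagging logits over token pairs, shape (B, L, L, num_2d_labels)
        logits_2d = self.head_2d(pair)
        return logits_1d, logits_2d
```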
In this technical report, we introduce Effidit (Efficient and Intelligent Editing), a digital writing assistant that helps users write higher-quality text more efficiently through the use of Artificial Intelligence (AI) technologies. Previous writing assistants typically provide error checking (to detect and correct spelling and grammatical errors) and limited text-rewriting functionality. With the emergence of large neural language models, some systems support automatically completing a sentence or a paragraph. In Effidit, we significantly expand the capacities of a writing assistant by providing functions in five categories: text completion, error checking, text polishing, keywords to sentences (K2S), and cloud input methods (cloud IME). In the text completion category, Effidit supports generation-based sentence completion, retrieval-based sentence completion, and phrase completion; in contrast, many other writing assistants so far offer only one or two of these three functions. For text polishing, we provide three functions: (context-aware) phrase polishing, sentence paraphrasing, and sentence expansion, whereas many other writing assistants typically support only one or two functions in this category. The main contents of this report include the major modules of Effidit, the methods for implementing these modules, and the evaluation results of some key methods.
To better exploit search logs and model users' behavior patterns, numerous click models have been proposed to extract users' implicit interaction feedback. Most traditional click models are based on the probabilistic graphical model (PGM) framework, which requires manually designed dependencies and may oversimplify user behaviors. Recently, neural-network-based methods have been proposed to improve the prediction accuracy of user behaviors by enhancing expressive power and allowing flexible dependencies. However, they still suffer from data sparsity and cold-start problems. In this paper, we propose a novel graph-enhanced click model (GraphCM) for web search. First, we regard each query or document as a vertex and propose novel homogeneous graph construction methods for queries and documents respectively, to fully exploit both intra-session and inter-session information and thereby address the sparsity and cold-start problems. Second, following the examination hypothesis, we separately model the attractiveness estimator and the examination predictor to output attractiveness scores and examination probabilities, where graph neural networks and neighbor interaction techniques are applied to extract the auxiliary information encoded in the pre-constructed homogeneous graphs. Finally, we apply combination functions to integrate the examination probabilities and attractiveness scores into click predictions. Extensive experiments on three real-world session datasets show that GraphCM not only outperforms state-of-the-art models, but also achieves superior performance in addressing the data sparsity and cold-start problems.
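A minimal sketch of the examination-hypothesis combination step described above: an attractiveness score and an examination probability, each produced by its own estimator (stand-in MLPs here in place of the graph-enhanced estimators), are combined into a click probability. This illustrates the factorization only, not the GraphCM code.

```python
import torch
import torch.nn as nn

class ClickCombinationSketch(nn.Module):
    """Examination hypothesis: P(click) = P(examined) * P(attractive | examined)."""

    def __init__(self, dim=64):
        super().__init__()
        # stand-ins for the GNN-enhanced attractiveness estimator and
        # examination predictor described in the abstract
        self.attractiveness = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                            nn.Linear(dim, 1))
        self.examination = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                         nn.Linear(dim, 1))

    def forward(self, query_doc_feats, context_feats):
        # query_doc_feats: (B, dim) graph-enhanced query-document representation
        # context_feats:   (B, dim) session/position context representation
        alpha = torch.sigmoid(self.attractiveness(query_doc_feats))  # attractiveness score
        beta = torch.sigmoid(self.examination(context_feats))        # examination probability
        click_prob = alpha * beta   # multiplicative combination function
        return click_prob
```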
Crime prediction is crucial for public safety and resource optimization, yet it is very challenging for two reasons: i) the dynamics of criminal patterns, as crime events are distributed unevenly across the spatial and temporal domains; ii) the time-evolving dependencies between different types of crimes (e.g., theft, robbery, assault, damage), which reveal the fine-grained semantics of crimes. To tackle these challenges, we propose a Spatial-Temporal Sequential Hypergraph Network (ST-SHN) to collectively encode complex spatial-temporal crime patterns as well as the underlying category-wise crime semantic relations. Specifically, to handle spatial-temporal dynamics under long-range and global context, we design a graph-structured message passing architecture integrated with a hypergraph learning paradigm. To capture category-wise heterogeneous crime relations in a dynamic environment, we introduce a multi-channel routing mechanism to learn the time-evolving structural dependencies across crime types. We conduct extensive experiments on two real-world datasets, showing that our proposed ST-SHN framework significantly improves prediction performance compared with various state-of-the-art baselines. The source code is available at https://github.com/akaxlh/st-hn.
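A minimal sketch of one round of hypergraph message passing of the kind referenced above: region-node embeddings are aggregated onto hyperedges and then propagated back to nodes through a degree-normalized incidence matrix. It illustrates the hypergraph-learning step only; the multi-channel routing and temporal components of ST-SHN are not reproduced, and the normalization choice is an assumption.

```python
import torch
import torch.nn as nn

def hypergraph_propagate(node_emb: torch.Tensor,
                         incidence: torch.Tensor,
                         proj: nn.Linear) -> torch.Tensor:
    """One node -> hyperedge -> node propagation step.

    node_emb:  (N, d) embeddings of region nodes
    incidence: (N, E) binary incidence matrix, incidence[i, e] = 1 if node i is in hyperedge e
    """
    # normalize by node / hyperedge degrees to keep the scale stable
    node_deg = incidence.sum(dim=1, keepdim=True).clamp(min=1)   # (N, 1)
    edge_deg = incidence.sum(dim=0, keepdim=True).clamp(min=1)   # (1, E)
    # aggregate node messages onto hyperedges
    edge_emb = (incidence / edge_deg).t() @ node_emb             # (E, d)
    # propagate hyperedge messages back to nodes, followed by a learnable transform
    new_node_emb = (incidence / node_deg) @ edge_emb             # (N, d)
    return torch.relu(proj(new_node_emb))

# usage sketch: proj = nn.Linear(d, d); emb = hypergraph_propagate(emb, H, proj)
```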
Many previous studies aim to augment collaborative filtering with deep neural network techniques, so as to achieve better recommendation performance. However, most existing deep-learning-based recommender systems are designed to model a single type of user-item interaction behavior, and can hardly distill the heterogeneous relations between users and items. In practical recommendation scenarios, there exist multiple types of user behaviors, such as browsing and purchasing. Because users' multi-behavior patterns over different items are overlooked, existing recommendation methods are insufficient to capture the heterogeneous collaborative signals in user multi-behavior data. Inspired by the strength of graph neural networks for structured data modeling, this work proposes a Graph Neural Multi-Behavior Enhanced Recommendation (GNMR) framework, which explicitly models the dependencies between different types of user-item interactions under a graph-based message passing architecture. GNMR devises a relation aggregation network to model interaction heterogeneity, and recursively performs embedding propagation between neighboring nodes over the user-item interaction graph. Experiments on real-world recommendation datasets show that GNMR consistently outperforms state-of-the-art methods. The source code is available at https://github.com/akaxlh/gnmr.
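A minimal sketch of the relation aggregation idea described above: neighbor embeddings are aggregated separately per behavior type (e.g., browse, purchase) and then fused into one user representation. The shapes and fusion choice are illustrative assumptions, not the GNMR release.

```python
import torch
import torch.nn as nn

class MultiBehaviorAggregationSketch(nn.Module):
    """Aggregate user-item messages per behavior type, then fuse across behaviors."""

    def __init__(self, dim=64, num_behaviors=4):
        super().__init__()
        # one transform per interaction type (e.g. view, add-to-cart, purchase, ...)
        self.behavior_proj = nn.ModuleList(
            [nn.Linear(dim, dim) for _ in range(num_behaviors)])
        self.fuse = nn.Linear(num_behaviors * dim, dim)

    def forward(self, item_emb: torch.Tensor, adj_per_behavior: list) -> torch.Tensor:
        # item_emb:         (I, dim) item embeddings
        # adj_per_behavior: list of (U, I) normalized adjacency matrices,
        #                   one per behavior type, for a batch of U users
        per_behavior = []
        for adj, proj in zip(adj_per_behavior, self.behavior_proj):
            msg = adj @ item_emb                      # aggregate neighbors under this behavior
            per_behavior.append(torch.relu(proj(msg)))
        # fuse the behavior-specific messages into one user representation
        return self.fuse(torch.cat(per_behavior, dim=-1))   # (U, dim)
```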
Loss functions play an important role in training deep-network-based object detectors. The most widely used evaluation metric for object detection is Average Precision (AP), which captures the performance of the localization and classification sub-tasks simultaneously. However, due to the non-differentiable nature of the AP metric, traditional object detectors adopt separate differentiable losses for the two sub-tasks. Such a misalignment may lead to performance degradation. To address this, existing works seek to design surrogate losses for the AP metric manually, which requires expertise and may still be sub-optimal. In this paper, we propose Parameterized AP Loss, in which parameterized functions are introduced to replace the non-differentiable components in the AP calculation. Different AP approximations are thus represented by a family of parameterized functions in a unified formula, and an automatic parameter search algorithm is then employed to find the optimal parameters. Extensive experiments on the COCO benchmark with three different object detectors (i.e., RetinaNet, Faster R-CNN, and Deformable DETR) demonstrate that the proposed Parameterized AP Loss consistently outperforms existing handcrafted losses. Code is released at https://github.com/fundamentalvision/parameterized-ap-loss.
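A minimal sketch of the core idea stated above: the non-differentiable ranking indicator inside the AP computation is replaced by a smooth, parameterized function (a temperature-scaled sigmoid here), yielding a differentiable AP surrogate whose parameter could then be searched. This sigmoid substitution is one simple member of such a family, not the paper's searched parameterization.

```python
import torch

def smooth_ap_loss(scores: torch.Tensor, labels: torch.Tensor,
                   tau: float = 0.1) -> torch.Tensor:
    """Differentiable AP surrogate.

    scores: (N,) predicted confidences
    labels: (N,) binary {0, 1} ground-truth labels
    tau:    temperature of the sigmoid replacing the step function;
            as tau -> 0 the surrogate approaches the true AP.
    """
    pos = labels.bool()
    if pos.sum() == 0:
        return scores.sum() * 0.0
    # pairwise score differences diff[i, j] = s_j - s_i
    diff = scores.unsqueeze(0) - scores.unsqueeze(1)              # (N, N)
    # smooth indicator "j is ranked above i": step function -> parameterized sigmoid
    above = torch.sigmoid(diff / tau)
    above = above - torch.diag_embed(torch.diagonal(above))       # ignore self-pairs
    # smooth rank of each sample among all samples / among positives only
    rank_all = 1.0 + above.sum(dim=1)                             # (N,)
    rank_pos = 1.0 + above[:, pos].sum(dim=1)                     # (N,)
    precision_at_pos = rank_pos[pos] / rank_all[pos]
    ap = precision_at_pos.mean()
    return 1.0 - ap   # minimize (1 - AP)
```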
Few Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes given only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features based on a Transformer-like framework. Our key insights are twofold: first, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features; second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice, from two aspects: feature-level and instance-level. In particular, we first design a mask-based dynamic weighting module to enhance support features and then propose to link object queries for better calibration via cross-attention. After these steps, the novel classes are improved significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modifications. Benchmarking on the COCO dataset under FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shot settings, e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on Few Shot Object Detection. Code and model will be available.
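A minimal sketch of the first (feature-level) enhancement step described above: support masks are used to pool class centers from the support features, and the query features are re-weighted by their similarity to those centers. The pooling and re-weighting choices are illustrative assumptions; the instance-level cross-attention step is not shown.

```python
import torch
import torch.nn.functional as F

def mask_pooled_class_centers(support_feats: torch.Tensor,
                              support_masks: torch.Tensor) -> torch.Tensor:
    """Average support features inside each ground-truth mask.

    support_feats: (S, C, H, W) support image features
    support_masks: (S, K, H, W) binary masks, one per class
    returns:       (K, C) class centers
    """
    masks = support_masks.float()
    # masked average of features over each class region
    num = torch.einsum("schw,skhw->kc", support_feats, masks)
    den = masks.sum(dim=(0, 2, 3)).clamp(min=1e-6).unsqueeze(1)   # (K, 1)
    return num / den

def reweight_query_features(query_feats: torch.Tensor,
                            centers: torch.Tensor) -> torch.Tensor:
    """Scale query features by their max cosine similarity to any class center.

    query_feats: (C, H, W), centers: (K, C)
    """
    C, H, W = query_feats.shape
    q = F.normalize(query_feats.reshape(C, -1), dim=0)            # (C, HW)
    c = F.normalize(centers, dim=1)                               # (K, C)
    sim = (c @ q).max(dim=0).values.reshape(1, H, W)              # (1, H, W)
    return query_feats * (1.0 + sim)   # emphasize class-relevant locations
```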
When using LiDAR semantic segmentation models for safety-critical applications such as autonomous driving, it is essential to understand and improve their robustness with respect to a large range of LiDAR corruptions. In this paper, we aim to comprehensively analyze the robustness of LiDAR semantic segmentation models under various corruptions. To rigorously evaluate the robustness and generalizability of current approaches, we propose a new benchmark called SemanticKITTI-C, which features 16 out-of-domain LiDAR corruptions in three groups: adverse weather, measurement noise, and cross-device discrepancy. We then systematically investigate 11 LiDAR semantic segmentation models spanning different input representations (e.g., point clouds, voxels, projected images), network architectures, and training schemes. From this study we obtain two insights: 1) the input representation plays a crucial role in robustness, as different representations behave quite differently under specific corruptions; 2) although state-of-the-art LiDAR semantic segmentation methods achieve promising results on clean data, they are less robust when dealing with noisy data. Finally, based on these observations, we design a robust LiDAR segmentation model (RLSeg) which greatly boosts robustness with simple but effective modifications. We expect that our benchmark, comprehensive analysis, and observations can boost future research in robust LiDAR semantic segmentation for safety-critical applications.
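A minimal sketch of how robustness under a corruption suite like the one above can be scored: each corruption is applied to the clean point cloud, the model is evaluated on each corrupted copy, and the mean drop from the clean mIoU is reported. The corruption functions and metric name are illustrative stand-ins, not the SemanticKITTI-C toolkit.

```python
import numpy as np

def jitter_points(points: np.ndarray, sigma: float = 0.02) -> np.ndarray:
    """Measurement-noise style corruption: Gaussian jitter on xyz coordinates."""
    noisy = points.copy()
    noisy[:, :3] += np.random.normal(0.0, sigma, size=noisy[:, :3].shape)
    return noisy

def drop_points(points: np.ndarray, ratio: float = 0.3) -> np.ndarray:
    """Cross-device style corruption: randomly drop a fraction of the points."""
    keep = np.random.rand(points.shape[0]) >= ratio
    return points[keep]

def mean_corruption_drop(model_miou, clean_cloud, corruptions) -> float:
    """Average mIoU drop relative to clean data over a set of corruption functions.

    model_miou: callable mapping a point cloud to an mIoU score
                (stand-in that hides labels and the model inside).
    """
    clean = model_miou(clean_cloud)
    drops = [clean - model_miou(corrupt(clean_cloud)) for corrupt in corruptions]
    return float(np.mean(drops))

# usage sketch:
# score = mean_corruption_drop(evaluate_miou, cloud, [jitter_points, drop_points])
```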